Mobile network


AI/ML in 3GPP 5G Advanced -- Services and Architecture

Taksande, Pradnya, Kiran, Shwetha, Jha, Pranav, Chaporkar, Prasanna

arXiv.org Artificial Intelligence

Abstract--The 3rd Generation Partnership Project (3GPP), the standards body for mobile networks, is in the final phase of Release 19 standardization and is beginning Release 20. Artificial Intelligence/Machine Learning (AI/ML) has brought about a paradigm shift in technology and is being adopted across industries and verticals. This paper focuses on the AI/ML-related technological advancements and features introduced in Release 19 within the Service and System Aspects (SA) Technical Specification Group of 3GPP. The advancements relate to two paradigms: (i) enhancements that AI/ML brings to the 5G Advanced system (AI for network), and (ii) support provided by the 5G Advanced system for AI/ML-based services (network for AI). Artificial Intelligence (AI) and Machine Learning (ML) are transforming numerous industries and multiple aspects of modern life. From personalized recommendations on streaming platforms to real-time fraud detection in banking, AI/ML technologies are driving smarter decision-making across industries. In retail, they assist in inventory and supply chain management. In transportation, autonomous vehicles rely on ML for object detection and navigation. As data continues to grow, these technologies are evolving rapidly, reshaping how we work, interact, and solve complex problems, making them central to innovation in today's world.


Continual Learning to Generalize Forwarding Strategies for Diverse Mobile Wireless Networks

Park, Cheonjin, Manfredi, Victoria, Zhang, Xiaolan, Liu, Chengyi, Wolfe, Alicia P, Song, Dongjin, Tasneem, Sarah, Wang, Bing

arXiv.org Artificial Intelligence

Deep reinforcement learning (DRL) has been successfully used to design forwarding strategies for multi-hop mobile wireless networks. While such strategies can be used directly for networks with varied connectivity and dynamic conditions, developing generalizable approaches that are effective on scenarios significantly different from the training environment remains largely unexplored. In this paper, we propose a framework to address the challenge of generalizability by (i) developing a generalizable base model considering diverse mobile network scenarios, and (ii) using the generalizable base model for new scenarios, and when needed, fine-tuning the base model using a small amount of data from the new scenarios. To support this framework, we first design new features to characterize network variation and feature quality, thereby improving the information used in DRL-based forwarding decisions. We then develop a continual learning (CL) approach able to train DRL models across diverse network scenarios without "catastrophic forgetting." Using extensive evaluation, including real-world scenarios in two cities, we show that our approach is generalizable to unseen mobility scenarios. Compared to a state-of-the-art heuristic forwarding strategy, it leads to up to 78% reduction in delay, 24% improvement in delivery rate, and comparable or slightly higher number of forwards.
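One common way to train a DRL policy across scenarios without catastrophic forgetting is to mix replayed experience from earlier scenarios into every update on a new scenario. The sketch below is a minimal illustration of that general idea, not the authors' method; the state features, network sizes, and replay ratio are assumptions.

```python
# Minimal sketch (assumed setup): a DQN-style forwarding policy whose updates
# mix experience from previously seen network scenarios with the new one,
# so earlier mobility patterns are not "forgotten".
import random
import torch
import torch.nn as nn

STATE_DIM = 8      # e.g., per-neighbor features: link quality, queue length, ...
NUM_NEIGHBORS = 4  # action = which neighbor to forward the packet to

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_NEIGHBORS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

old_buffer = []  # transitions collected in previously trained scenarios
new_buffer = []  # transitions from the current (new) scenario

def store(buffer, s, a, r, s_next, done):
    buffer.append((s, a, r, s_next, done))

def update(batch_size=32, replay_ratio=0.5, gamma=0.99):
    """One gradient step over a batch that mixes old- and new-scenario experience."""
    n_old = int(batch_size * replay_ratio)
    batch = random.sample(old_buffer, min(n_old, len(old_buffer))) + \
            random.sample(new_buffer, min(batch_size - n_old, len(new_buffer)))
    s, a, r, s2, d = map(torch.tensor, zip(*batch))
    s, s2, r, d = s.float(), s2.float(), r.float(), d.float()
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - d) * q_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random transitions (purely illustrative):
for buf in (old_buffer, new_buffer):
    for _ in range(100):
        store(buf, torch.rand(STATE_DIM).tolist(), random.randrange(NUM_NEIGHBORS),
              random.random(), torch.rand(STATE_DIM).tolist(), False)
print("loss:", update())
```

In practice the replay ratio trades off plasticity on the new scenario against retention of the old ones; the paper's CL approach and feature design are more elaborate than this toy buffer-mixing scheme.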


MobiGPT: A Foundation Model for Mobile Wireless Networks

Qi, Xiaoqian, Chai, Haoye, Li, Yong

arXiv.org Artificial Intelligence

With the rapid development of mobile communication technologies, future mobile networks will offer vast services and resources for commuting, production, daily life, and entertainment. Accurate and efficient forecasting of mobile data (e.g., cell traffic, user behavior, channel quality) helps operators monitor network state changes, orchestrate wireless resources, and schedule infrastructure and users, thereby improving supply efficiency and service quality. However, current forecasting paradigms rely on customized designs with tailored models for exclusive data types. Such approaches increase complexity and deployment costs under large-scale, heterogeneous networks involving base stations, users, and channels. In this paper, we design a foundation model for mobile data forecasting, MobiGPT, with a unified structure capable of forecasting three data types: base station traffic, user app usage, and channel quality. We propose a soft-prompt learning method to help the model understand features of different data types, and introduce a temporal masking mechanism to guide the model through three forecasting tasks: short-term prediction, long-term prediction, and distribution generation, supporting diverse optimization scenarios. Evaluations on real-world datasets with over 100,000 samples show that MobiGPT achieves accurate multi-type forecasting. Compared to existing models, it improves forecasting accuracy by 27.37%, 20.08%, and 7.27%, reflecting strong generalization. Moreover, MobiGPT exhibits superior zero/few-shot performance in unseen scenarios, with over 21.51% improvement, validating its strong transferability as a foundation model.
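The abstract names two mechanisms, soft prompts per data type and temporal masking per task. A generic illustration of how such mechanisms are often wired together is sketched below; the dimensions, task definitions, and mask lengths are assumptions, not MobiGPT's actual design.

```python
# Hypothetical sketch: (1) a learnable soft prompt per data type is prepended to
# the embedded sequence, (2) a temporal mask turns one backbone into three tasks
# (short-term prediction, long-term prediction, distribution generation).
import torch
import torch.nn as nn

SEQ_LEN, D_MODEL, PROMPT_LEN = 96, 64, 4
DATA_TYPES = ["bs_traffic", "app_usage", "channel_quality"]

# One learnable soft prompt per data type (assumed prompt length).
soft_prompts = nn.ParameterDict(
    {t: nn.Parameter(torch.randn(PROMPT_LEN, D_MODEL) * 0.02) for t in DATA_TYPES}
)

def temporal_mask(task: str, seq_len: int = SEQ_LEN) -> torch.Tensor:
    """True = time step the model must predict (illustrative horizons)."""
    mask = torch.zeros(seq_len, dtype=torch.bool)
    if task == "short_term":          # predict the next few steps
        mask[-4:] = True
    elif task == "long_term":         # predict a longer horizon
        mask[-24:] = True
    elif task == "generation":        # reconstruct randomly masked steps
        mask[torch.rand(seq_len) < 0.5] = True
    return mask

def build_input(x: torch.Tensor, data_type: str, task: str):
    """x: (seq_len, d_model) embedded mobile-data sequence."""
    mask = temporal_mask(task, x.shape[0])
    x = x.clone()
    x[mask] = 0.0                                   # hide the target positions
    prompt = soft_prompts[data_type]                # (prompt_len, d_model)
    return torch.cat([prompt, x], dim=0), mask      # fed to a transformer backbone

# Example: prepare a base-station traffic sequence for long-term prediction.
tokens, target_mask = build_input(torch.randn(SEQ_LEN, D_MODEL), "bs_traffic", "long_term")
print(tokens.shape, int(target_mask.sum()))
```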


Reasoning Language Models for Root Cause Analysis in 5G Wireless Networks

Sana, Mohamed, Piovesan, Nicola, De Domenico, Antonio, Kang, Yibin, Zhang, Haozhe, Debbah, Merouane, Ayed, Fadhel

arXiv.org Artificial Intelligence

Root Cause Analysis (RCA) in mobile networks remains a challenging task due to the need for interpretability, domain expertise, and causal reasoning. In this work, we propose a lightweight framework that leverages Large Language Models (LLMs) for RCA. To do so, we introduce TeleLogs, a curated dataset of annotated troubleshooting problems designed to benchmark RCA capabilities. Our evaluation reveals that existing open-source reasoning LLMs struggle with these problems, underscoring the need for domain-specific adaptation. To address this issue, we propose a two-stage training methodology that combines supervised fine-tuning with reinforcement learning to improve the accuracy and reasoning quality of LLMs. The proposed approach fine-tunes a series of RCA models to integrate domain knowledge and generate structured, multi-step diagnostic explanations, improving both interpretability and effectiveness. Extensive experiments across multiple LLM sizes show significant performance gains over state-of-the-art reasoning and non-reasoning models, including strong generalization to randomized test variants. These results demonstrate the promise of domain-adapted, reasoning-enhanced LLMs for practical and explainable RCA in network operation and management.
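The two-stage methodology (supervised fine-tuning followed by reinforcement learning) typically needs two ingredients: a formatter that turns annotated troubleshooting cases into SFT pairs, and a reward that scores generated diagnoses during RL. The sketch below shows one plausible shape of those pieces; the field names, prompt wording, and reward weights are assumptions, not the TeleLogs schema or the authors' reward design.

```python
# Hypothetical sketch of the two stages around RCA fine-tuning.
import re

def to_sft_example(sample: dict) -> dict:
    """Stage 1: turn an annotated troubleshooting case into an SFT prompt/target pair."""
    prompt = (
        "You are a 5G troubleshooting assistant.\n"
        f"Network logs:\n{sample['logs']}\n"
        "Explain your reasoning step by step, then state the root cause as "
        "'Root cause: <label>'."
    )
    target = f"{sample['explanation']}\nRoot cause: {sample['root_cause']}"
    return {"prompt": prompt, "completion": target}

def rca_reward(model_output: str, gold_root_cause: str) -> float:
    """Stage 2: reward a correct root cause plus a structured, multi-step explanation."""
    m = re.search(r"Root cause:\s*(.+)", model_output, flags=re.IGNORECASE)
    predicted = m.group(1).strip().lower() if m else ""
    reward = 1.0 if predicted == gold_root_cause.strip().lower() else 0.0
    # Small bonus when the answer is preceded by numbered reasoning steps.
    if len(re.findall(r"^\s*\d+\.", model_output, flags=re.MULTILINE)) >= 2:
        reward += 0.2
    return reward

# Toy check of the reward shaping:
out = "1. UE reports low RSRP.\n2. Neighbor cell overloaded.\nRoot cause: handover failure"
print(rca_reward(out, "Handover failure"))  # -> 1.2
```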


From Cell Towers to Satellites: A 2040 Blueprint for Urban-Grade Direct-to-Device Mobile Networks

Elgueta, Sebastian Barros

arXiv.org Artificial Intelligence

In 2023, satellite and mobile networks crossed a historic threshold: standard smartphones, using unmodified 3GPP protocols, connected directly to low Earth orbit (LEO) satellites. This first wave of direct-to-device (D2D) demonstrations validated the physical feasibility of satellite-based mobile access. However, these systems remain fallback-grade--rural-only, bandwidth-limited, and fully dependent on Earth-based mobile cores for identity, session, and policy control. This paper asks a more ambitious question: Can a complete mobile network, including radio access, core functions, traffic routing, and content delivery, operate entirely from orbit? And can it deliver sustained, urban-grade service in the world's densest cities? We present the first end-to-end system architecture for a fully orbital telco, integrating electronically steered phased arrays with 1000-beam capacity, space-based deployment of 5G core functions (UPF, AMF), and inter-satellite laser mesh backhaul. We analyze spectral efficiency, beam capacity, and link budgets under dense urban conditions, accounting for path loss, Doppler, and multipath. Simulations show that rooftop and line-of-sight users can sustain 64-QAM throughput, while street-level access is feasible with relay or assisted beam modes. The paper outlines the remaining constraints (power, thermal dissipation, compute radiation hardening, and regulatory models) and demonstrates that these are engineering bottlenecks, not physical limits. Finally, we propose a staged 15-year roadmap from today's fallback D2D systems to autonomous orbital overlays delivering 50-100 Mbps to handhelds in megacities, with zero reliance on terrestrial infrastructure.
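To make the link-budget discussion concrete, the sketch below runs a back-of-the-envelope free-space path loss and SNR calculation for a LEO direct-to-device downlink. Every number (altitude, carrier frequency, EIRP, bandwidth, noise figure, required SNR for 64-QAM) is an assumed illustrative value, not a figure from the paper, and the calculation ignores Doppler and multipath effects the paper accounts for.

```python
# Back-of-the-envelope LEO D2D downlink budget with assumed parameters.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

altitude_m = 550e3            # assumed slant range at nadir (LEO)
freq_hz = 2.0e9               # assumed mid-band carrier
eirp_dbw = 55.0               # assumed per-beam satellite EIRP (dBW)
rx_gain_dbi = 0.0             # handheld antenna gain
bandwidth_hz = 20e6
noise_figure_db = 7.0
required_snr_64qam_db = 20.0  # rough SNR for high-rate 64-QAM

path_loss = fspl_db(altitude_m, freq_hz)
rx_power_dbw = eirp_dbw - path_loss + rx_gain_dbi
noise_dbw = 10 * math.log10(1.380649e-23 * 290 * bandwidth_hz) + noise_figure_db
snr_db = rx_power_dbw - noise_dbw
print(f"FSPL: {path_loss:.1f} dB, SNR: {snr_db:.1f} dB, "
      f"64-QAM margin: {snr_db - required_snr_64qam_db:.1f} dB")
```

With these assumed values the margin is a few dB for a rooftop line-of-sight user; street-level users would see additional blockage and fading losses, which is why the paper points to relay or assisted beam modes there.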


Towards Cognitive Service Delivery on B5G through AIaaS Architecture

Moreira, Larissa F. Rodrigues, Moreira, Rodrigo, Silva, Flávio de Oliveira, Backes, André R.

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is pivotal in advancing mobile network systems by facilitating smart capabilities and automation. The transition from 4G to 5G has substantial implications for AI in consolidating a network predominantly geared towards business verticals. In this context, 3GPP has specified and introduced the Network Data Analytics Function (NWDAF) entity at the network's core to provide insights based on AI algorithms to benefit network orchestration. This paper proposes a framework for evolving NWDAF that presents the interfaces necessary to further empower the core network with AI capabilities in B5G and 6G. In addition, we identify a set of research directions for realizing a distributed e-NWDAF.
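The NWDAF role described here is essentially an analytics producer that other core-network functions consume via subscribe/notify interfaces. The toy sketch below illustrates that pattern only; the class and method names are hypothetical and do not correspond to the 3GPP-defined Nnwdaf services or the paper's e-NWDAF interfaces.

```python
# Hypothetical, simplified subscribe/notify pattern for an NWDAF-style analytics
# function. Names and payloads are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AnalyticsService:
    """Toy analytics producer: consumers subscribe to an analytics ID."""
    subscribers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, analytics_id: str, callback: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(analytics_id, []).append(callback)

    def publish(self, analytics_id: str, report: dict) -> None:
        for cb in self.subscribers.get(analytics_id, []):
            cb(report)

# A consumer (e.g., an orchestration function) reacting to slice-load analytics.
nwdaf_like = AnalyticsService()
nwdaf_like.subscribe("slice_load", lambda r: print("scale out" if r["load"] > 0.8 else "ok"))

# The analytics function pushes an inference result derived from its ML model.
nwdaf_like.publish("slice_load", {"slice": "eMBB-1", "load": 0.87})
```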


Distributed Collaborative Inference System in Next-Generation Networks and Communication

Zhang, Chuan, Zheng, Xixi, Tao, Xiaolong, Hu, Chenfei, Zhang, Weiting, Zhu, Liehuang

arXiv.org Artificial Intelligence

With the rapid advancement of artificial intelligence, generative artificial intelligence (GAI) has taken a leading role in transforming data processing methods. However, the high computational demands of GAI present challenges for devices with limited resources. As we move towards the sixth generation of mobile networks (6G), the higher data rates and improved energy efficiency of 6G create a need for more efficient data processing in GAI. Traditional GAI, however, shows its limitations in meeting these demands. To address these challenges, we introduce a multi-level collaborative inference system designed for next-generation networks and communication. Our proposed system features a deployment strategy that assigns models of varying sizes to devices at different network layers. Then, we design a task offloading strategy to optimise both efficiency and latency. Furthermore, a modified early exit mechanism is implemented to enhance the inference process for single models. Experimental results demonstrate that our system effectively reduces inference latency while maintaining high-quality output. Specifically, compared to existing work, our system can reduce inference time by up to 17% without sacrificing the inference accuracy.
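The early exit idea mentioned in this abstract is usually realized with intermediate classifier heads and a confidence threshold: easy inputs leave the network at an early head, cutting latency. The sketch below shows that standard mechanism in miniature; the backbone, head placement, and threshold are assumptions, not the paper's modified mechanism.

```python
# Hypothetical sketch of a confidence-threshold early exit.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, num_classes)     # early head
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit2 = nn.Linear(hidden, num_classes)     # final head
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        h = self.block1(x)
        p1 = torch.softmax(self.exit1(h), dim=-1)
        if p1.max().item() >= self.threshold:           # confident enough: stop here
            return p1, "exit1"
        h = self.block2(h)
        return torch.softmax(self.exit2(h), dim=-1), "exit2"

# Single-sample inference (illustrative input size).
probs, taken = EarlyExitNet()(torch.randn(1, 32))
print(taken, int(probs.argmax(dim=-1)))
```

In a multi-level deployment, the early heads would typically sit on resource-constrained edge devices and the later blocks on more capable nodes, which is how early exit and offloading interact.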


FoMo: A Foundation Model for Mobile Traffic Forecasting with Diffusion Model

Chai, Haoye, Zhang, Shiyuan, Qi, Xiaoqian, Li, Yong

arXiv.org Artificial Intelligence

Mobile traffic forecasting allows operators to anticipate network dynamics and performance in advance, offering substantial potential for enhancing service quality and improving user experience. However, existing models are often task-oriented and are trained with tailored data, which limits their effectiveness in diverse mobile network tasks such as Base Station (BS) deployment, resource allocation, and energy optimization, and hinders generalization across different urban environments. Foundation models have made remarkable strides across various domains of NLP and CV due to their multi-task adaptation and zero/few-shot learning capabilities. In this paper, we propose an innovative Foundation model for Mobile traffic forecasting (FoMo), aiming to handle diverse forecasting tasks of short/long-term predictions and distribution generation across multiple cities to support network planning and optimization. FoMo combines diffusion models and transformers, where various spatio-temporal masks are proposed to enable FoMo to learn intrinsic features of different tasks, and a contrastive learning strategy is developed to capture the correlations between mobile traffic and urban contexts, thereby improving its transfer learning capability. Extensive experiments on 9 real-world datasets demonstrate that FoMo outperforms current models on diverse forecasting tasks and zero/few-shot learning, showcasing strong universality. We further deploy FoMo on the JiuTian optimization platform of China Mobile, where we use the predicted mobile data to formulate network planning and optimization applications, including BS deployment, resource block scheduling, and BS sleep control.
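The contrastive strategy described here, capturing correlations between mobile traffic and urban contexts, is commonly implemented as an InfoNCE-style loss that aligns the embeddings of a cell's traffic with the embeddings of its urban-context features. The sketch below illustrates that loss only; the encoders, feature dimensions, and temperature are assumptions, not FoMo's implementation.

```python
# Hypothetical sketch of an InfoNCE-style loss aligning traffic and urban-context
# embeddings (matching rows are positive pairs, all other rows are negatives).
import torch
import torch.nn.functional as F

def info_nce(traffic_emb: torch.Tensor, context_emb: torch.Tensor, tau: float = 0.1):
    """traffic_emb, context_emb: (batch, dim) outputs of two (assumed) encoders."""
    t = F.normalize(traffic_emb, dim=-1)
    c = F.normalize(context_emb, dim=-1)
    logits = t @ c.T / tau                   # similarity of every traffic/context pair
    labels = torch.arange(t.shape[0])        # diagonal pairs are the positives
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
loss = info_nce(torch.randn(16, 64), torch.randn(16, 64))
print(float(loss))
```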


Future-Proofing Mobile Networks: A Digital Twin Approach to Multi-Signal Management

Morabito, Roberto, Pandey, Bivek, Daubaris, Paulius, Wanigarathna, Yasith R, Tarkoma, Sasu

arXiv.org Artificial Intelligence

Digital Twins (DTs) are set to become a key enabling technology in future wireless networks, with their use in network management increasing significantly. We developed a DT framework that leverages the heterogeneity of network access technologies as a resource for enhanced network performance and management, enabling smart data handling in the physical network. Tested in a Campus Area Network environment, our framework integrates diverse data sources to provide real-time, holistic insights into network performance and environmental sensing. We also envision that traditional analytics will evolve to rely on emerging AI models, such as Generative AI (GenAI), while leveraging current analytics capabilities. This capacity can simplify analytics processes through advanced ML models, enabling descriptive, diagnostic, predictive, and prescriptive analytics in a unified fashion. Finally, we present specific research opportunities concerning interoperability aspects and envision aligning advancements in DT technology with evolved AI integration.


Large Language Model-Driven Curriculum Design for Mobile Networks

Erak, Omar, Alhussein, Omar, Naser, Shimaa, Alabbasi, Nouf, Mi, De, Muhaidat, Sami

arXiv.org Artificial Intelligence

This study introduces an innovative framework that employs large language models (LLMs) to automate the design and generation of curricula for reinforcement learning (RL). As mobile networks evolve towards the 6G era, managing their increasing complexity and dynamic nature poses significant challenges. Conventional RL approaches often suffer from slow convergence and poor generalization due to conflicting objectives and the large state and action spaces associated with mobile networks. To address these shortcomings, we introduce curriculum learning, a method that systematically exposes the RL agent to progressively challenging tasks, improving convergence and generalization. However, curriculum design typically requires extensive domain knowledge and manual human effort. Our framework mitigates this by utilizing the generative capabilities of LLMs to automate the curriculum design process, significantly reducing human effort while improving the RL agent's convergence and performance. We deploy our approach within a simulated mobile network environment and demonstrate improved RL convergence rates, generalization to unseen scenarios, and overall performance enhancements. As a case study, we consider autonomous coordination and user association in mobile networks. Our obtained results highlight the potential of combining LLM-based curriculum generation with RL for managing next-generation wireless networks, marking a significant step towards fully autonomous network operations.
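The pipeline described in this abstract amounts to asking an LLM for an ordered list of progressively harder environment configurations and then training the RL agent stage by stage. The sketch below shows that control flow in a toy form; the prompt wording, configuration fields, and the llm_complete() placeholder are all assumptions, not the authors' framework.

```python
# Hypothetical sketch of LLM-generated curricula for RL in a mobile-network task.
import json

CURRICULUM_PROMPT = """You are designing a curriculum for an RL agent that learns
user association in a mobile network. Return a JSON list of 4 stages, each with
"num_users", "num_cells", and "mobility" (low/medium/high), ordered from easiest
to hardest."""

def llm_complete(prompt: str) -> str:
    """Placeholder for any LLM client; here it returns a canned curriculum."""
    return json.dumps([
        {"num_users": 5,   "num_cells": 2,  "mobility": "low"},
        {"num_users": 20,  "num_cells": 4,  "mobility": "low"},
        {"num_users": 50,  "num_cells": 7,  "mobility": "medium"},
        {"num_users": 100, "num_cells": 10, "mobility": "high"},
    ])

def train_stage(agent_state: dict, stage_cfg: dict, episodes: int = 10) -> dict:
    """Stand-in for the RL training loop on one curriculum stage."""
    agent_state.setdefault("stages_seen", []).append(stage_cfg)
    return agent_state

agent = {}
for stage in json.loads(llm_complete(CURRICULUM_PROMPT)):
    agent = train_stage(agent, stage)   # progressively harder environments
print(len(agent["stages_seen"]), "stages completed")
```

In a real setup the LLM output would be validated against a schema before use, and the agent's performance on each stage could be fed back into the next curriculum-generation prompt.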